
kernel parameter



Incorporating Side Information by Adaptive Convolution

Di Kang, Debarun Dhar, Antoni Chan

Neural Information Processing Systems

Computer vision tasks often have side information available that is helpful to solve the task. For example, for crowd counting, the camera perspective (e.g., camera angle and height) gives a clue about the appearance and scale of people in the scene. While side information has been shown to be useful for counting systems using traditional hand-crafted features, it has not been fully utilized in counting systems based on deep learning.


Adaptive Kernel Selection for Stein Variational Gradient Descent

Melcher, Moritz, Weissmann, Simon, Wilson, Ashia C., Zech, Jakob

arXiv.org Machine Learning

A central challenge in Bayesian inference is efficiently approximating posterior distributions. Stein Variational Gradient Descent (SVGD) is a popular variational inference method which transports a set of particles to approximate a target distribution. The SVGD dynamics are governed by a reproducing kernel Hilbert space (RKHS) and are highly sensitive to the choice of the kernel function, which directly influences both convergence and approximation quality. The commonly used median heuristic offers a simple approach for setting kernel bandwidths but lacks flexibility and often performs poorly, particularly in high-dimensional settings. In this work, we propose an alternative strategy for adaptively choosing kernel parameters over an abstract family of kernels. Recent convergence analyses based on the kernelized Stein discrepancy (KSD) suggest that optimizing the kernel parameters by maximizing the KSD can improve performance. Building on this insight, we introduce Adaptive SVGD (Ad-SVGD), a method that alternates between updating the particles via SVGD and adaptively tuning kernel bandwidths through gradient ascent on the KSD. We provide a simplified theoretical analysis that extends existing results on minimizing the KSD for fixed kernels to our adaptive setting, showing convergence properties for the maximal KSD over our kernel class. Our empirical results further support this intuition: Ad-SVGD consistently outperforms standard heuristics in a variety of tasks.
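The alternating scheme described above can be sketched in a few lines. The following is a minimal 1-D illustration (our own toy setup, not the paper's implementation): SVGD particle updates on a Gaussian target alternate with finite-difference gradient ascent of the U-statistic KSD over the RBF bandwidth h.

```python
import numpy as np

# Toy 1-D setup (ours, for illustration): target N(2, 1).
def grad_log_p(x, mu=2.0, sigma=1.0):
    return -(x - mu) / sigma**2

def svgd_step(x, h, eps=0.05):
    # One SVGD update with an RBF kernel of bandwidth h:
    # kernel-smoothed score (attraction) plus kernel gradient (repulsion).
    diff = x[:, None] - x[None, :]
    K = np.exp(-diff**2 / (2 * h**2))
    phi = (K @ grad_log_p(x) + (diff / h**2 * K).sum(axis=1)) / len(x)
    return x + eps * phi

def ksd2(x, h):
    # Squared kernelized Stein discrepancy (U-statistic) for the RBF kernel.
    s, diff = grad_log_p(x), x[:, None] - x[None, :]
    K = np.exp(-diff**2 / (2 * h**2))
    term = (s[:, None] * s[None, :] + (s[:, None] - s[None, :]) * diff / h**2
            + 1.0 / h**2 - diff**2 / h**4)
    A = K * term
    n = len(x)
    return (A.sum() - np.trace(A)) / (n * (n - 1))

def ad_svgd(x, h=0.5, iters=500, h_lr=0.02):
    # Alternate particle updates with gradient ascent on the KSD over h
    # (finite differences stand in for autodiff in this sketch).
    for _ in range(iters):
        x = svgd_step(x, h)
        g = (ksd2(x, h + 1e-4) - ksd2(x, h - 1e-4)) / 2e-4
        h = float(np.clip(h + h_lr * g, 0.05, 3.0))
    return x, h

rng = np.random.default_rng(0)
x, h = ad_svgd(rng.normal(-3.0, 0.5, size=50))
```

The bandwidth clip interval and step sizes are arbitrary choices for this demo; the point is only the alternation between transport and kernel adaptation.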


A Additional Background on Bayesian Neural Networks and Variational Inference

Neural Information Processing Systems

Consider a training set comprising N input-output pairs, D = {(x_n, y_n)}_{n=1}^N. … Neal, 2012, Blundell et al., 2015], and (iii) using structured variational approximations that can potentially capture weight correlations in the posterior [Louizos and Welling, 2016, Zhang et al.]. We also vary the number of inducing points we afford each kernel. The main difference in the local model is the dependence of the weights on the inputs.


A Dynamic Programs for SSK Evaluations and Gradients

Neural Information Processing Systems

We now detail recursive strategies for calculating k_n(a, b) and its gradients with O(nl…) cost. A recursive strategy can efficiently calculate the contribution of a particular substring by pre-calculating the contributions of the smaller sub-strings contained within the target string. Context-free grammars (CFGs) are 4-tuples G = (V, Σ, R, S), consisting of: a set of non-terminal symbols V; a set of terminal symbols Σ (also known as an alphabet); a set of production rules R; and a non-terminal starting symbol S from which all strings are generated. The CFG for the symbolic regression task of Section 5.3 is given by the following rules:

S → S '+' T | S '*' T | S '/' T | T
T → '(' S ')' | 'sin(' S ')' | 'exp(' S ')' | 'x' | '1' | '2' | '3'

We now provide implementation details for our GA acquisition-function optimizers. The GA begins with a randomly sampled population and ends once the best string in the population stops improving between iterations (Algorithm 1). Although seemingly simple, our synthetic string optimization tasks of Section 5.1 are deceptively difficult. We now provide comprehensive experimental results across the synthetic string optimization tasks.
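The grammar above can be turned into a string sampler directly; the sketch below (helper names are ours, and we read the production whose operator was lost in extraction as '*') expands non-terminals recursively, restricting to non-recursive productions once a depth limit is hit. This is the kind of generator used to seed a GA population with grammatically valid strings.

```python
import random

# Production rules transcribed from the grammar quoted above.
RULES = {
    "S": [["S", "+", "T"], ["S", "*", "T"], ["S", "/", "T"], ["T"]],
    "T": [["(", "S", ")"], ["sin(", "S", ")"], ["exp(", "S", ")"],
          ["x"], ["1"], ["2"], ["3"]],
}

def sample(symbol="S", depth=0, max_depth=4, rng=random):
    # Expand non-terminals recursively; past the depth limit, restrict to
    # non-recursive productions so generation always terminates.
    if symbol not in RULES:
        return symbol
    options = RULES[symbol]
    if depth >= max_depth:
        terminal = [o for o in options if all(s not in RULES for s in o)]
        options = terminal or [min(options, key=len)]
    return "".join(sample(s, depth + 1, max_depth, rng)
                   for s in rng.choice(options))
```

Every generated string is well-formed by construction (balanced parentheses, only symbols from the alphabet), which is exactly what a grammar-constrained search space buys.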


GPU Performance Portability needs Autotuning

Ringlein, Burkhard, Parnell, Thomas, Stoica, Radu

arXiv.org Artificial Intelligence

As LLMs grow in complexity, achieving state-of-the-art performance requires tight co-design across algorithms, software, and hardware. Today's reliance on a single dominant platform limits portability, creates vendor lock-in, and raises barriers for new AI hardware. In this work, we make the case for combining just-in-time (JIT) compilation with comprehensive kernel-parameter autotuning to enable portable LLM inference with state-of-the-art performance without code changes. Focusing on performance-critical LLM kernels, we demonstrate that this approach explores up to 15× more kernel parameter configurations, produces significantly more diverse code across multiple dimensions, and even outperforms vendor-optimized implementations by up to 230%, all while reducing kernel code size by 70% and eliminating manual code optimizations. Our results highlight autotuning as a promising path to unlocking model portability across GPU vendors. Large Language Models (LLMs) have evolved dramatically in the past years. Besides improvements in model architectures and training procedures, there have been many innovations in optimizing LLM applications for modern hardware [1]-[4].
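The core autotuning loop is simple to sketch. Below is an illustrative, hardware-agnostic analogue in plain Python (real systems benchmark GPU kernels, not NumPy calls): every point of the kernel-parameter space is benchmarked on the actual inputs, and the fastest configuration is kept.

```python
import itertools, time
import numpy as np

def blocked_matmul(A, B, bm, bn):
    # Toy "kernel": a blocked matrix multiply whose tile sizes are the
    # tunable kernel parameters.
    m, n = A.shape[0], B.shape[1]
    C = np.empty((m, n))
    for i in range(0, m, bm):
        for j in range(0, n, bn):
            C[i:i+bm, j:j+bn] = A[i:i+bm] @ B[:, j:j+bn]
    return C

def autotune(A, B, space):
    # Benchmark every configuration on the actual inputs and keep the
    # fastest -- the result a JIT autotuner would cache per problem shape.
    best, best_t = None, float("inf")
    for bm, bn in space:
        t0 = time.perf_counter()
        blocked_matmul(A, B, bm, bn)
        dt = time.perf_counter() - t0
        if dt < best_t:
            best, best_t = (bm, bn), dt
    return best

rng = np.random.default_rng(0)
A, B = rng.normal(size=(256, 256)), rng.normal(size=(256, 256))
space = list(itertools.product([32, 64, 128, 256], repeat=2))
cfg = autotune(A, B, space)
```

Production autotuners prune the space and cache results per (shape, dtype, device) key rather than searching exhaustively on every call; the exhaustive loop here is purely for clarity.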


PyGraph: Robust Compiler Support for CUDA Graphs in PyTorch

Ghosh, Abhishek, Nayak, Ajay, Panwar, Ashish, Basu, Arkaprava

arXiv.org Artificial Intelligence

CUDA Graphs -- a recent hardware feature introduced for NVIDIA GPUs -- aim to reduce CPU launch overhead by capturing and launching a series of GPU tasks (kernels) as a DAG. However, deploying CUDA Graphs faces several challenges today due to the static structure of a graph, and it incurs performance overhead due to data copies. In fact, we show a counter-intuitive result: deploying CUDA Graphs hurts performance in many cases. We introduce PyGraph, a novel approach to automatically harness the power of CUDA Graphs within PyTorch2. Driven by three key observations, PyGraph embodies three novel optimizations: it enables wider deployment of CUDA Graphs, reduces GPU kernel parameter copy overheads, and selectively deploys CUDA Graphs based on a cost-benefit analysis. PyGraph seamlessly integrates with PyTorch2's compilation toolchain, enabling efficient use of CUDA Graphs without manual code modifications. We evaluate PyGraph across various machine learning benchmarks, demonstrating substantial performance improvements over PyTorch2.
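The selective-deployment idea can be illustrated with a toy cost-benefit model (the function and constants below are ours, not PyGraph's actual model): a captured graph pays off only when the launch overhead it removes exceeds the parameter-copy cost it introduces.

```python
def worth_capturing(n_kernels, launch_us, param_bytes,
                    copy_bw_bytes_per_us, replays):
    # Illustrative cost-benefit rule: a CUDA Graph saves the CPU launch
    # overhead of every kernel on every replay, but each replay must copy
    # kernel parameters into the graph's placeholder buffers.
    saved_us = n_kernels * launch_us * replays
    copy_us = (param_bytes / copy_bw_bytes_per_us) * replays
    return saved_us > copy_us

# Many small kernels, few parameter bytes: capture pays off.
deep_net = worth_capturing(500, 5.0, 4096, 1000.0, 100)
# Few kernels but huge per-replay parameter copies: capture hurts.
copy_bound = worth_capturing(3, 5.0, 10**9, 1000.0, 100)
```

This mirrors the abstract's counter-intuitive result: when copy costs dominate, naively enabling CUDA Graphs makes things slower, so the decision has to be made per graph.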


Wasserstein-based Kernels for Clustering: Application to Power Distribution Graphs

Oneto, Alfredo, Gjorgiev, Blazhe, Sansavini, Giovanni

arXiv.org Artificial Intelligence

Many data clustering applications must handle objects that cannot be represented as vector data. In this context, the bag-of-vectors representation can be leveraged to describe complex objects through discrete distributions, and the Wasserstein distance can effectively measure the dissimilarity between them. Additionally, kernel methods can be used to embed data into feature spaces that are easier to analyze. Despite significant progress in data clustering, a method that simultaneously accounts for distributional and vectorial dissimilarity measures is still lacking. To tackle this gap, this work explores kernel methods and Wasserstein distance metrics to develop a computationally tractable clustering framework. The compositional properties of kernels allow the simultaneous handling of different metrics, enabling the integration of both vectors and discrete distributions for object representation. This approach is flexible enough to be applied in various domains, such as graph analysis and image processing. The framework consists of three main components. First, we efficiently approximate pairwise Wasserstein distances using multiple reference distributions. Second, we employ kernel functions based on Wasserstein distances and present ways of composing kernels to express different types of information. Finally, we use the kernels to cluster data and evaluate the quality of the results using scalable and distance-agnostic validity indices. A case study involving two datasets of 879 and 34,920 power distribution graphs demonstrates the framework's effectiveness and efficiency.
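A minimal sketch of the pipeline follows, with deliberate simplifications on our part: 1-D bags so the Wasserstein-1 distance has a closed form over sorted samples, and spectral bisection in place of the paper's reference-distribution approximation and validity-index-driven clustering.

```python
import numpy as np

def w1(a, b):
    # Exact 1-D Wasserstein-1 distance between equal-size empirical
    # samples: mean absolute difference of the sorted samples.
    return np.abs(np.sort(a) - np.sort(b)).mean()

rng = np.random.default_rng(0)
# Ten "objects", each a bag of scalar samples, from two underlying groups.
bags = [rng.normal(0.0, 1.0, 64) for _ in range(5)]
bags += [rng.normal(5.0, 1.0, 64) for _ in range(5)]

W = np.array([[w1(a, b) for b in bags] for a in bags])
# Gaussian kernel on the Wasserstein distance (such kernels are not
# guaranteed positive definite in general; fine for this small demo).
K = np.exp(-W**2 / (2 * np.median(W)**2))

# Spectral bisection: the sign pattern of the second-largest eigenvector
# of the near-block-diagonal kernel matrix recovers the two groups.
_, vecs = np.linalg.eigh(K)
labels = (vecs[:, -2] > 0).astype(int)
```

Composing kernels as in the paper would amount to multiplying or adding this distributional kernel with an ordinary vector kernel on per-object features before the clustering step.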


Kernel Stochastic Configuration Networks for Nonlinear Regression

Chen, Yongxuan, Wang, Dianhui

arXiv.org Artificial Intelligence

Stochastic configuration networks (SCNs), a class of randomized learner models, are characterized by their random assignment of parameters under a supervisory mechanism, which yields the universal approximation property at the algorithmic level. This paper presents a kernel version of SCNs, termed KSCNs, aiming to enhance the model's representation-learning capability and performance stability. The random bases of a built SCN model can be used to span a reproducing kernel Hilbert space (RKHS), on which our proposed algorithm constructs KSCNs. It is shown that the data distribution in the reconstructive space is favorable for regression, and that the proposed KSCN learner models hold the universal approximation property. Three benchmark datasets, including two industrial datasets, are used in this study for performance evaluation. Experimental results, with comparisons against existing solutions, clearly demonstrate that the proposed KSCNs remarkably outperform the original SCNs and some typical kernel methods on nonlinear regression problems in terms of learning performance, model stability, and robustness with respect to the kernel parameter settings.
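The idea of a kernel spanned by random bases can be sketched as follows (an illustrative stand-in, not the paper's KSCN algorithm: plain random sigmoid nodes replace SCN's supervised random assignment): the random nodes define a feature map, and the induced finite-rank kernel is used for ridge regression.

```python
import numpy as np

def phi(X, W, b):
    # Random sigmoid bases; their span defines a finite-rank RKHS.
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.normal(size=200)

L = 100                                # number of random basis nodes
W = rng.uniform(-4.0, 4.0, (1, L))     # random input weights
b = rng.uniform(-4.0, 4.0, (1, L))     # random biases

# Kernel ridge regression with the induced kernel k(x, z) = phi(x).phi(z).
K = phi(X, W, b) @ phi(X, W, b).T
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)

Xt = np.linspace(-1.0, 1.0, 100)[:, None]
pred = phi(Xt, W, b) @ phi(X, W, b).T @ alpha
```

The ridge parameter and the ranges of the random weights are the knobs whose robustness the paper evaluates; in a real SCN they would be set by the supervisory mechanism rather than fixed by hand.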


Puppet-CNN: Input-Adaptive Convolutional Neural Networks with Model Compression using Ordinary Differential Equation

Xing, Yucheng, Wang, Xin

arXiv.org Artificial Intelligence

Convolutional Neural Networks (CNNs) have been applied to more and more scenarios due to their excellent performance in many machine learning tasks, especially with deep and complex structures. However, as the network goes deeper, more parameters need to be stored and optimized. Moreover, almost all common CNN models adopt a "train-and-use" strategy: the structure is pre-defined, the kernel parameters are fixed after training, and the same structure and set of parameters are used for all data without considering content complexity. In this paper, we propose a new CNN framework, named $\textit{Puppet-CNN}$, which contains two modules: a $\textit{puppet module}$ and a $\textit{puppeteer module}$. The puppet module is a CNN used to actually process the input data, as in other works, but its depth and kernels are generated by the puppeteer module (realized with an Ordinary Differential Equation (ODE)) based on the complexity of each input. By recurrently generating the kernel parameters of the puppet module, we can exploit the dependence among kernels of different convolutional layers to significantly reduce the size of the CNN model, since only the parameters of the much smaller puppeteer ODE module need to be stored and trained. Experiments on several datasets show that our method is superior to traditional CNNs in both performance and efficiency. The model size can be reduced by more than 10 times.
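The puppet/puppeteer split can be sketched with a toy recurrence standing in for the ODE module (all shapes, the generator, and the depth rule below are ours, purely illustrative): the generator evolves a hidden state seeded by an input statistic and emits one convolution kernel per layer, so only the small generator would ever be stored and trained.

```python
import numpy as np

def puppeteer(z, n_layers, ksize=3, hidden=8):
    # Toy stand-in for the ODE puppeteer: a fixed recurrence that evolves
    # a hidden state seeded by the input statistic z and emits one conv
    # kernel per layer; only (A, B) would need storing and training.
    rng = np.random.default_rng(0)
    A = rng.normal(0.0, 0.5, (hidden, hidden))
    B = rng.normal(0.0, 0.5, (hidden, ksize))
    h = np.tanh(np.full(hidden, z))
    kernels = []
    for _ in range(n_layers):
        h = np.tanh(A @ h)        # evolve the generator state
        kernels.append(B.T @ h)   # emit this layer's kernel
    return kernels

def puppet(x, kernels):
    # The puppet: a plain 1-D conv net using the generated kernels.
    for k in kernels:
        x = np.maximum(np.convolve(x, k, mode="same"), 0.0)  # conv + ReLU
    return x

x = np.sin(np.linspace(0.0, 6.28, 64))
depth = 2 if np.std(x) < 1.0 else 4   # toy input-complexity rule (ours)
y = puppet(x, puppeteer(np.std(x), depth))
```

The compression argument is visible even here: the puppet's kernels (depth × ksize values per input) are never stored; they are reproduced on demand from the generator's fixed parameters.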